Make pubsub example snippets testable. #1690
Conversation
.. doctest::
.. literalinclude:: pubsub_snippets.py
   :start-after: [START client_list_topics]
   :end-before: [END client_list_topics]
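For reference, the directive pulls in whatever lies between the matching `[START client_list_topics]` / `[END client_list_topics]` comments in `pubsub_snippets.py`. A minimal sketch of what such a region might look like (the function body and the exact `list_topics()` return signature here are assumptions for illustration, not the file's actual contents):

```python
# Illustrative only: a snippet function in pubsub_snippets.py, with the
# [START]/[END] comments delimiting the region Sphinx copies into the docs.
def list_topics(client, to_delete):
    """List all topics for the client's project."""
    # [START client_list_topics]
    topics, token = client.list_topics()  # API request
    while True:
        for topic in topics:
            print(topic.name)
        if token is None:
            break
        topics, token = client.list_topics(page_token=token)  # API request
    # [END client_list_topics]
```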
I fixed the lint errors and squashed.
Rebased to fix merge conflicts.
@tseaver What's the status here? Just deciding what we want?
@dhermes From my perspective, the PR is ready to merge: I'm pleased with the end-to-end testability of the snippets.
OK so @jgeewax can we try to reach a resolution? My preference is to remove the

@tseaver One note I keep forgetting to mention: running the examples as system tests isn't particularly crucial. However, mocking out some parts (e.g. env. sniffing when constructing a client) would be painful to do with

We could "meet in the middle" and have a script which extracts snippets from docstrings, writes them to a file, and then fails if the generated files don't match what we expect them to look like?
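To make the "meet in the middle" idea concrete, a rough sketch of such a check might look like the following; the module path, file names, and the `>>>`-line extraction are all assumptions rather than an agreed design:

```python
# Pull doctest-style example lines out of the library's docstrings,
# regenerate a snippets file, and fail when it drifts from the checked-in copy.
import inspect
import re
import sys

from gcloud.pubsub import client as client_module  # assumed target module

PROMPT_LINE = re.compile(r'^\s*>>> (.*)$', re.MULTILINE)


def snippets_from_docstrings(module):
    """Collect the '>>>' example lines from a module's docstrings."""
    lines = []
    for _, member in inspect.getmembers(module):
        docstring = inspect.getdoc(member) or ''
        lines.extend(PROMPT_LINE.findall(docstring))
    return '\n'.join(lines) + '\n'


def main():
    generated = snippets_from_docstrings(client_module)
    with open('pubsub_snippets.py') as file_obj:
        expected = file_obj.read()
    if generated != expected:
        print('pubsub_snippets.py is out of sync with the docstring examples.')
        sys.exit(1)


if __name__ == '__main__':
    main()
```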
Hmm, ensuring they don't bitrot as the API fixes bugs, etc., seems like a win to me: the code that shows up in examples demonstrably works, rather than just compiling without errors. I still haven't heard anything defending the need for snippets at the REPL prompt, rather than expecting users to read our fine manual.
I'm saying running them as unit tests with some mocks/stubs should be just fine. Full set-up as system tests may be overkill.
I'd be willing to bet there would be more code involved in setting up the mocks needed to run them without talking to the API: the mocks would have to preserve / update expected state across multiple requests, which our current mocking patterns don't handle well. FWIW, as they stand in this PR, the Pubsub snippets actually have better API coverage than our current system tests.
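As a rough illustration of the state problem being described (the class and its `api_request` routing are hypothetical, not our existing mock pattern): a fake connection that lets the snippets run without the real API has to remember what earlier requests created.

```python
class _FakePubsubConnection(object):
    """Hypothetical stateful fake standing in for the real connection."""

    def __init__(self):
        self._topics = {}

    def api_request(self, method, path, data=None, **kwargs):
        if method == 'PUT':
            # A created topic must stay visible to later list/get/delete calls.
            self._topics[path] = {'name': path}
            return self._topics[path]
        if method == 'GET' and path.endswith('/topics'):
            # Listing reflects every topic created so far.
            return {'topics': list(self._topics.values())}
        if method == 'DELETE':
            # Deletion mutates the shared state again.
            return self._topics.pop(path, None)
        raise NotImplementedError((method, path))
```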
That's just a statement about poor coverage of the system tests.
It is a statement about the system test coverage, sure: needing a tested snippet for the uncovered

I'm puzzled that you don't weigh the "snippets actually work against the real API" consideration higher: if we had that pattern in place for all our APIs, we would get many fewer user-driven bug reports about the docs.
@tseaver We've got the thumbs up from @jgeewax to use the literal includes and make sure our docs look good when Sphinx-generated. Now we need to make sure we are square on this approach. Some things I think still need to be addressed (maybe not in this PR, but in a follow-up):
I think the easiest way to get this PR in is just to drop a lot of the features in the snippets file that make it runnable, and then build up, in new PRs, something we can agree on and that is extensible. So the core change in this PR would be using the literal includes from an actual Python file.
@tseaver Bump
I've rebased to fix conflicts. I will look at your other points shortly.
@dhermes I can't repro the Travis failures (or skips) locally: not only that, but the hash for the failing test (06b9815) doesn't match the hash at the tip of my branch (b587a66).
Did you create a merge commit? Even if you did, it's unclear why it isn't showing up in the PR.
The commit hash that Travis wants to test isn't even on my branch.
Maybe I should just close this PR and open a new one for my branch?
I'd create a fresh branch locally from
But do whatever you're comfortable with. As it were, I suggested you just rip out the snippets into literal includes for now and then we have a design discussion around how to run the snippets.
I just rebased again from the current head.
@tseaver I never gave the LGTM for the merge. I suppose the "do whatever you're comfortable with" may have been confusing. It was in reference to your
I opened #1744 to track your points about how the snippets should be exercised.
Toward #212, #1141, etc.
@dhermes, @jgeewax Not-yet-complete transformation, but worth discussing.
I'm basically ripping off the snippet format from @pcostell.
The snippets can be tested by running the file as a script, under an appropriately configured environment:
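(The exact invocation is elided above; as a hypothetical sketch, the script's driver might look roughly like the following. The `snippet_` naming convention and the environment variables mentioned in the comments are assumptions, not the PR's actual contents.)

```python
# Hypothetical driver for running pubsub_snippets.py as a script; assumes
# credentials and the project come from the environment (e.g.
# GOOGLE_APPLICATION_CREDENTIALS / GCLOUD_PROJECT or similar).
from gcloud import pubsub


def main():
    client = pubsub.Client()
    to_delete = []
    # Run every module-level function named 'snippet_*' (convention assumed
    # for this sketch), then clean up whatever resources they registered.
    snippets = [value for name, value in sorted(globals().items())
                if name.startswith('snippet_') and callable(value)]
    for snippet in snippets:
        snippet(client, to_delete)
    for resource in to_delete:
        resource.delete()


if __name__ == '__main__':
    main()
```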